324 research outputs found

    Regression Analysis In Longitudinal Studies With Non-ignorable Missing Outcomes

    One difficulty in regression analysis for longitudinal data is that the outcomes are often missing in a non-ignorable way (Little & Rubin, 1987). Likelihood based approaches to deal with non-ignorable missing outcomes can be divided into selection models and pattern mixture models based on the way the joint distribution of the outcome and the missing-data indicators is partitioned. One new approach from each of these two classes of models is proposed. In the first approach, a normal copula-based selection model is constructed to combine the distribution of the outcome of interest and that of the missing-data indicators given the covariates. Parameters in the model are estimated by a pseudo maximum likelihood method (Gong & Samaniego, 1981). In the second approach, a pseudo maximum likelihood method introduced by Gourieroux et al. (1984) is used to estimate the identifiable parameters in a pattern mixture model. This procedure provides consistent estimators when the mean structure is correctly specified for each pattern, with further information on the variance structure giving an efficient estimator. A Hausman type test (Hausman, 1978) of model misspecification is also developed for model simplification to improve efficiency. Separate simulations are carried out to assess the performance of the two approaches, followed by applications to real data sets from an epidemiological cohort study investigating dementia, including Alzheimer's disease.
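
    The Hausman-type comparison referenced above can be pictured with a short, generic sketch (Python with numpy/scipy assumed; the function and its inputs are illustrative and not taken from the paper): the statistic contrasts an estimator that remains consistent under misspecification with one that is efficient when the simpler model holds.

    # Generic Hausman-type misspecification test: compare a consistent-but-inefficient
    # estimate with an estimate that is efficient under the null of correct specification.
    import numpy as np
    from scipy.stats import chi2

    def hausman_test(beta_consistent, cov_consistent, beta_efficient, cov_efficient):
        """Return the Hausman statistic and its chi-square p-value."""
        d = beta_consistent - beta_efficient      # difference of the two estimates
        v = cov_consistent - cov_efficient        # covariance of the difference under the null
        stat = float(d @ np.linalg.pinv(v) @ d)   # quadratic form; pinv guards against a singular v
        return stat, chi2.sf(stat, df=len(d))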

    Inverse probability weighting for covariate adjustment in randomized studies

    Covariate adjustment in randomized clinical trials has the potential benefit of precision gain. It also has the potential pitfall of reduced objectivity, as it opens the possibility of selecting a 'favorable' model that yields a strong treatment benefit estimate. Although there is a large volume of statistical literature on the first aspect, realistic solutions that enforce objective inference while improving precision are rare. As a typical randomized trial needs to accommodate many implementation issues beyond statistical considerations, maintaining objectivity is at least as important as precision gain, if not more so, particularly from the perspective of regulatory agencies. In this article, we propose a two-stage estimation procedure based on inverse probability weighting to achieve better precision without compromising objectivity. The procedure is designed so that the covariate adjustment is performed before the outcome is seen, effectively reducing the possibility of selecting a 'favorable' model that yields a strong intervention effect. Both theoretical and numerical properties of the estimation procedure are presented, together with an application of the proposed method to a real data example.
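
    As a rough illustration of the two-stage idea (the weight model is fitted before the outcome is examined), the hedged Python sketch below uses a logistic working model for the assignment probabilities and then forms a weighted difference in means; it is a simplified paraphrase, not the authors' exact procedure, and the statsmodels dependency is an assumption.

    import numpy as np
    import statsmodels.api as sm

    def ipw_effect(treat, covariates, outcome):
        # Stage 1: model treatment assignment given covariates (no outcome data used).
        X = sm.add_constant(covariates)
        ps = sm.Logit(treat, X).fit(disp=0).predict(X)
        # Stage 2: inverse-probability-weighted difference in means once outcomes are available.
        w = treat / ps + (1 - treat) / (1 - ps)
        mu1 = np.sum(w * treat * outcome) / np.sum(w * treat)
        mu0 = np.sum(w * (1 - treat) * outcome) / np.sum(w * (1 - treat))
        return mu1 - mu0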

    Estimation of treatment effect in a subpopulation: An empirical Bayes approach

    It is well recognized that the benefit of a medical intervention may not be distributed evenly in the target population due to patient heterogeneity, and conclusions based on conventional randomized clinical trials may not apply to every person. Given the increasing cost of randomized trials and difficulties in recruiting patients, there is a strong need to develop analytical approaches to estimate treatment effects in subpopulations. In particular, due to the limited sample size of subpopulations and the need for multiple comparisons, standard analysis tends to yield wide confidence intervals for the treatment effect that are often noninformative. We propose an empirical Bayes approach that combines information embedded in a target subpopulation with information from other subjects to construct confidence intervals for the treatment effect. The method is appealing in its simplicity and tangibility in characterizing the uncertainty about the true treatment effect. Simulation studies and a real data analysis are presented.
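
    A minimal sketch of the shrinkage calculation underlying such an empirical Bayes interval is given below (Python; the variable names and the normal-normal approximation are illustrative assumptions, not the authors' notation): the subgroup estimate is pulled toward the overall estimate with a weight determined by the relative variances.

    import numpy as np
    from scipy.stats import norm

    def eb_subgroup_effect(est_sub, se_sub, est_overall, tau2):
        """tau2: estimated between-subgroup variance of the true effects."""
        shrink = tau2 / (tau2 + se_sub**2)             # weight placed on the subgroup's own data
        post_mean = shrink * est_sub + (1 - shrink) * est_overall
        post_se = np.sqrt(shrink) * se_sub             # posterior sd under the normal-normal model
        ci = post_mean + np.array([-1.0, 1.0]) * norm.ppf(0.975) * post_se
        return post_mean, ci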

    Doubly Robust Estimation of Causal Effect: Upping the Odds of Getting the Right Answers

    Propensity score–based methods or multiple regressions of the outcome are often used for confounding adjustment in analysis of observational studies. In either approach, a model is needed: a model describing the relationship between the treatment assignment and covariates in the propensity score–based method, or a model for the outcome and covariates in the multiple regressions. The 2 models are usually unknown to the investigators and must be estimated. The correct model specification, therefore, is essential for the validity of the final causal estimate. We describe in this article a doubly robust estimator which combines both models propitiously to offer analysts 2 chances for obtaining a valid causal estimate and demonstrate its use through a data set from the Lindner Center Study.
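
    The standard augmented inverse probability weighting (AIPW) form of a doubly robust estimator is sketched below in Python as general background; it assumes the analyst supplies fitted propensity scores and fitted outcome means from whatever models were chosen, and is not necessarily the exact estimator used in the article.

    import numpy as np

    def aipw_effect(treat, outcome, ps, m1, m0):
        """ps: fitted propensity scores; m1, m0: fitted outcome means under treatment and control."""
        mu1 = np.mean(treat * (outcome - m1) / ps + m1)
        mu0 = np.mean((1 - treat) * (outcome - m0) / (1 - ps) + m0)
        return mu1 - mu0   # consistent if either the propensity or the outcome model is correct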

    An empirical Bayes model using a competition score for metabolite identification in gas chromatography mass spectrometry

    Background: Mass spectrometry (MS) based metabolite profiling has become increasingly popular for scientific and biomedical studies, primarily due to recent technological developments such as comprehensive two-dimensional gas chromatography time-of-flight mass spectrometry (GCxGC/TOF-MS). Nevertheless, the identification of metabolites from complex samples is subject to errors. Statistical/computational approaches to improve identification accuracy and false positive estimation are in great need. We propose an empirical Bayes model which accounts for a competition score in addition to the similarity score to tackle this problem. The competition score characterizes the propensity of a candidate metabolite to be matched to some spectrum, based on the metabolite's similarity scores with the other spectra in the library searched against. The competition score allows the model to properly assess the evidence on the presence/absence status of a metabolite based on whether or not the metabolite is matched to some sample spectrum.
    Results: With a mixture of metabolite standards, we demonstrated that our method has better identification accuracy than four other existing methods. Moreover, our method gives a reliable false discovery rate estimate. We also applied our method to data collected from the plasma of a rat and identified metabolites from the plasma with the false discovery rate controlled.
    Conclusions: We developed an empirical Bayes model for metabolite identification and validated the method through a mixture of metabolite standards and rat plasma. The results show that our hierarchical model improves identification accuracy compared with methods that do not structurally model the involved variables. The improvement in identification accuracy is likely to facilitate downstream analyses such as peak alignment and biomarker identification. Raw data and result matrices can be found at http://www.biostat.iupui.edu/~ChangyuShen/index.htm
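
    How a similarity score and a competition score might be combined into a posterior presence probability can be sketched generically (Python; the conditional-independence assumption, density functions and prior below are placeholders, not the paper's actual empirical Bayes model):

    from scipy.stats import norm

    def posterior_present(sim, comp, prior_present,
                          f_sim_present, f_sim_absent,
                          f_comp_present, f_comp_absent):
        """Each f_* is a density for one score given the presence/absence status."""
        like_present = f_sim_present(sim) * f_comp_present(comp)
        like_absent = f_sim_absent(sim) * f_comp_absent(comp)
        num = prior_present * like_present
        return num / (num + (1 - prior_present) * like_absent)

    # Example call with placeholder normal densities:
    p = posterior_present(0.8, 0.3, 0.2,
                          norm(0.9, 0.1).pdf, norm(0.5, 0.2).pdf,
                          norm(0.4, 0.2).pdf, norm(0.7, 0.2).pdf)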

    Model-based peak alignment of metabolomic profiling from comprehensive two-dimensional gas chromatography mass spectrometry

    Background: Comprehensive two-dimensional gas chromatography time-of-flight mass spectrometry (GCxGC/TOF-MS) has been used for metabolite profiling in metabolomics. However, there is still much experimental variation to be controlled, including both within-experiment and between-experiment variation. For efficient analysis, a peak alignment method that can deal with such variations is in great need.
    Results: Using experimental data from a mixture of metabolite standards, we demonstrated that our method performs better than an existing method that is not model-based. We then applied our method to data generated from the plasma of a rat, which also demonstrates the good performance of our model.
    Conclusions: We developed a model-based peak alignment method to process both homogeneous and heterogeneous experimental data. Its unique feature is that it is the only model-based peak alignment method coupled with metabolite identification in a unified framework. Through comparison with an existing method, we demonstrated that our method has better performance. Data are available at http://stage.louisville.edu/faculty/x0zhan17/software/software-development/mspa. The R source codes are available at http://www.biostat.iupui.edu/~ChangyuShen/CodesPeakAlignment.zip.

    Subgroup selection in adaptive signature designs of confirmatory clinical trials

    The increasing awareness of treatment effect heterogeneity has motivated flexible designs of confirmatory clinical trials that prospectively allow investigators to test for treatment efficacy for a subpopulation of patients in addition to the entire population. If a target subpopulation is not well characterized in the design stage, it can be developed at the end of a broad eligibility trial under an adaptive signature design. The paper proposes new procedures for subgroup selection and treatment effect estimation (for the selected subgroup) under an adaptive signature design. We first provide a simple and general characterization of the optimal subgroup that maximizes the power for demonstrating treatment efficacy or the expected gain based on a specified utility function. This characterization motivates a procedure for subgroup selection that involves prediction modelling, augmented inverse probability weighting and low dimensional maximization. A cross-validation procedure can be used to remove or reduce any resubstitution bias that may result from subgroup selection, and a bootstrap procedure can be used to make inference about the treatment effect in the subgroup selected. The approach proposed is evaluated in simulation studies and illustrated with real examples.
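
    The low-dimensional maximization step can be pictured with the hedged Python sketch below: each patient is scored by a predicted benefit, and a single cutoff is scanned for the subgroup with the largest standardized effect estimate. The prediction model, the cross-validation layer and the bootstrap inference are omitted, so this is only an outline of the selection idea, not the paper's full procedure.

    import numpy as np

    def select_cutoff(pred_benefit, treat, outcome, ps=0.5, min_size=20):
        """Scan cutoffs on predicted benefit; ps is the known randomization probability."""
        best = (None, -np.inf)
        for c in np.quantile(pred_benefit, np.linspace(0.1, 0.9, 17)):
            sub = pred_benefit >= c
            if sub.sum() < min_size:
                continue
            y, a = outcome[sub], treat[sub]
            # simple IPW contrast within the candidate subgroup
            pseudo = a * y / ps - (1 - a) * y / (1 - ps)
            z = np.mean(pseudo) / (np.std(pseudo, ddof=1) / np.sqrt(sub.sum()))
            if z > best[1]:
                best = (c, z)
        return best   # (selected cutoff, standardized effect in the selected subgroup)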

    Spectral dependence of transmission losses in high-index polymer coated no-core fibers

    A high-index polymer coated no-core fiber (PC-NCF) is effectively a depressed-core fiber, in which light is guided by anti-resonant, inhibited-coupling and total internal reflection effects, and whose dispersion diagram shows periodic resonant and anti-resonant bands. In this paper, the transmission spectra of straight and bent PC-NCFs (length > 5 cm) are measured and analyzed from a modal dispersion perspective. For the purpose of the study, the PC-NCFs are contained within a fiber hetero-structure using two single-mode fiber (SMF) pigtails, forming an SMF-PC-NCF-SMF structure. The anti-resonance spectral characteristics are suppressed by multimode interference when the PC-NCF is short. Increasing the length, or bending the fiber (bend radius > 28 cm), can make the anti-resonance dominate, resulting in periodic transmission loss dips and variations in the depth of these dips, due to the different modal intensity distributions in different bands and the material absorption of the polymer. PC-NCFs are expected to be used in many devices, including curvature sensors and tunable loss filters, as the experiments show that the change of the loss dip around 1550 nm is over 31 dB and the average sensitivity is up to 14.77 dB/m⁻¹ in the bend radius range from to 47.48 cm. Our study details the general principles of the effect of high-index layers in the formation of transmission loss dips in fiber optics.
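
    For background, the positions of such loss dips are often approximated with the standard anti-resonant reflecting optical waveguide (ARROW) condition; the short Python sketch below evaluates that textbook formula and is not taken from the paper's own dispersion analysis.

    import numpy as np

    def resonance_wavelengths(d, n_coating, n_fiber, orders=range(1, 6)):
        """Approximate resonance (loss-dip) wavelengths for a high-index coating of
        thickness d (returned in the same length unit as d)."""
        return [2 * d / m * np.sqrt(n_coating**2 - n_fiber**2) for m in orders]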